83 research outputs found

    Compiling a simulation language in APL

    This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in APL Quote Quad, http://dx.doi.org/10.1145/327600.327625. This paper describes the procedure used to build several compilers, written in APL and APL2, to translate two continuous simulation languages into APL and C++. The advantages and disadvantages of using APL to write a compiler are discussed. A compromise had to be found between performance (the model execution speed) and flexibility (the ease of modifying parameters and testing "what if" situations). The resulting compiler (an APL2 packaged workspace) has been used successfully to generate educational applications and in medical research. This paper has been sponsored by the Spanish Interdepartmental Commission of Science and Technology (CICYT), project number TIC-96-0723-C02-01.

    Learning to Attend, Copy, and Generate for Session-Based Query Suggestion

    Users try to articulate their complex information needs during search sessions by reformulating their queries. To make this process more effective, search engines provide related queries to help users specify their information need during the search process. In this paper, we propose a customized sequence-to-sequence model for session-based query suggestion. In our model, we employ a query-aware attention mechanism to capture the structure of the session context. This enables us to control the scope of the session from which we infer the suggested next query, which helps not only to handle noisy data but also to automatically detect session boundaries. Furthermore, we observe that, based on users' query reformulation behavior, within a single session a large portion of query terms is retained from the previously submitted queries and consists mostly of infrequent or unseen terms that are usually not included in the vocabulary. We therefore empower the decoder of our model to access the source words from the session context during decoding by incorporating a copy mechanism. Moreover, we propose evaluation metrics to assess the quality of generative models for query suggestion. We conduct an extensive set of experiments and analyses. The results suggest that our model outperforms the baselines both in terms of generating queries and scoring candidate queries for the task of query suggestion. Comment: Accepted for publication at The 26th ACM International Conference on Information and Knowledge Management (CIKM 2017).
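    The copy mechanism mentioned above amounts to mixing the decoder's vocabulary distribution with an attention-derived distribution over the terms of the session context, so that out-of-vocabulary terms from earlier queries can still be produced. The following is a minimal sketch of that mixing step only, not the authors' implementation; the toy vocabulary, the session tokens and the fixed mixing weight p_gen are assumptions made for the example (in the full model, p_gen would itself be predicted from the decoder state).

    import numpy as np

    # Toy vocabulary and a session context containing an out-of-vocabulary term.
    vocab = ["<unk>", "cheap", "flights", "hotels", "paris"]
    session_tokens = ["cheap", "flights", "ryanair"]   # "ryanair" is unseen

    # Extend the vocabulary with source tokens so copied OOV words can be scored.
    extended = vocab + [t for t in session_tokens if t not in vocab]

    def copy_augmented_distribution(p_vocab, attention, p_gen):
        """Mix the generation distribution with a copy distribution.

        p_vocab   : probabilities over `vocab` from the decoder softmax
        attention : attention weights over `session_tokens` (sum to 1)
        p_gen     : probability of generating from the vocabulary vs. copying
        """
        p_final = np.zeros(len(extended))
        p_final[:len(vocab)] = p_gen * p_vocab
        for weight, token in zip(attention, session_tokens):
            p_final[extended.index(token)] += (1.0 - p_gen) * weight
        return p_final

    # Made-up decoder outputs for a single decoding step.
    p_vocab = np.array([0.05, 0.30, 0.40, 0.15, 0.10])
    attention = np.array([0.20, 0.30, 0.50])   # most attention on "ryanair"
    for token, prob in zip(extended, copy_augmented_distribution(p_vocab, attention, 0.6)):
        print(f"{token:10s} {prob:.3f}")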

    Adapting the automatic assessment of free-text answers to the students

    In this paper, we present the first approach in the field of Computer Assisted Assessment (CAA) of students' free-text answers that models student profiles. This approach has been implemented in a new version of Atenea, a system able to automatically assess students' short answers. The system has been improved so that it can now take into account the students' preferences and personal features, adapting not only the assessment process but also the appearance of the interface. In particular, it can now accept students' answers written in either Spanish or English, by means of Machine Translation. Moreover, we have observed that Atenea's performance does not decrease drastically when combined with automatic translation, provided that the translation does not greatly reduce the variability of the vocabulary.
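    The translate-then-assess pipeline described here can be pictured with a small sketch: a machine-translation step (stubbed out below) followed by a similarity score between the translated answer and a reference answer. The stub translate_to_english and the bag-of-words cosine measure are illustrative assumptions, not Atenea's actual translation engine or scoring method.

    import re
    from collections import Counter
    from math import sqrt

    def translate_to_english(answer, source_lang):
        """Hypothetical MT stub; a real system would call a translation engine."""
        if source_lang == "en":
            return answer
        return answer  # placeholder: no actual translation performed here

    def cosine_similarity(text_a, text_b):
        """Bag-of-words cosine similarity between two texts."""
        a = Counter(re.findall(r"\w+", text_a.lower()))
        b = Counter(re.findall(r"\w+", text_b.lower()))
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def assess(answer, reference, source_lang="en"):
        """Translate the answer if needed, then score it against the reference."""
        return cosine_similarity(translate_to_english(answer, source_lang), reference)

    reference = "A stack is a last-in first-out data structure."
    print(assess("A stack stores items in last-in first-out order.", reference))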

    An Approach for Automatic Generation of on-line Information Systems based on the Integration of Natural Language Processing and Adaptive Hypermedia Techniques

    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defense date: 29-05-200

    Can computers automatically assess open-ended questions? (¿Pueden los ordenadores evaluar automáticamente preguntas abiertas?)

    Traditionally, the assessment section of most on-line courses was based solely on multiple-choice questions. However, in the widespread opinion of many researchers, educators and psychologists, restricting assessment exclusively to closed questions does not allow the students' cognitive skills to be evaluated fully. This has led to the creation of the field known as Automatic Assessment of open-ended questions. The question posed in this article is whether it really works. That is, how reliable is a computer as an automatic assessor of free-text answers written by students? To answer this question, we have reviewed the history of the field and seen how, in recent years, these systems have begun to be used as checkers of the grades given by teachers, in order to ensure correct assessment. In any case, it must not be forgotten that computers (at least for now) are no more than machines without common sense or intelligence of their own, which prevents them from successfully handling answers that are too original and may well be correct yet fall outside what is commonly established as correct. The challenge is set and, in our opinion, keeping the goals realistic is what will allow this field to advance steadily, dispelling any doubts about its validity. This work has been funded by project TIN2004-0314 of the Spanish Ministry of Education and Science.

    On the dynamic adaptation of Computer Assisted Assessment of free-text answers

    The final publication is available at Springer via http://dx.doi.org/10.1007/11768012_54. Proceedings of the 4th International Conference, AH 2006, Dublin, Ireland, June 21-23, 2006. To our knowledge, every free-text Computer Assisted Assessment (CAA) system automatically scores the students and gives them feedback according to their responses, but none of them yet includes personalization options. The free-text CAA system Atenea [1] had simple adaptation capabilities based on static student profiles [2]. In this paper, we present a new adaptive version called Willow. It is based on Atenea and adds the possibility of dynamically choosing the questions to be asked according to their difficulty level, the students' profiles and their previous answers. Both Atenea and Willow have been tested with 32 students, who expressed their satisfaction after using them. The results encourage us to continue exploring the possibilities of incorporating dynamic adaptation into free-text CAA. This work has been sponsored by the Spanish Ministry of Science and Technology, project number TIN2004-0314.
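    To illustrate the kind of dynamic selection Willow is described as performing, the sketch below picks the next question whose difficulty is closest to a running estimate of the student's ability and updates that estimate after each answer. The data structures and the update rule are assumptions made for this example, not details taken from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class Question:
        text: str
        difficulty: float          # 0.0 (easy) .. 1.0 (hard)

    @dataclass
    class StudentProfile:
        ability: float = 0.5       # running estimate of proficiency
        answered: set = field(default_factory=set)

    def update_ability(profile, question, score, rate=0.2):
        """Move the ability estimate towards the evidence from the last answer."""
        target = question.difficulty if score >= 0.5 else question.difficulty - 0.3
        profile.ability += rate * (target - profile.ability)
        profile.answered.add(question.text)

    def next_question(profile, questions):
        """Pick the unanswered question whose difficulty best matches the ability."""
        remaining = [q for q in questions if q.text not in profile.answered]
        return min(remaining, key=lambda q: abs(q.difficulty - profile.ability)) if remaining else None

    questions = [
        Question("Define recursion.", 0.3),
        Question("Explain tail-call optimisation.", 0.6),
        Question("Compare lazy and eager evaluation.", 0.8),
    ]
    student = StudentProfile()
    first = next_question(student, questions)
    update_ability(student, first, score=0.9)        # a good answer raises the estimate
    print(next_question(student, questions).text)     # the next best-matching question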